Non-Binding Agreements Shape the Future of AI Governance
Explore how non-binding agreements are guiding the global landscape of AI governance, ensuring ethical and inclusive development.
The Role of Non-Binding Agreements in AI Governance
In January 2025, the UK Government unveiled a blueprint to turbocharge the use of artificial intelligence (AI). However, the principles guiding international and state actors' use of AI remain a central topic in global conversations. International organisations and governments worldwide are actively working on both non-binding and binding principles for multilateral AI governance.
The Knowledge for Development and Diplomacy programme (K4DD) has released a report on multilateral technology governance. The report highlights the significance of non-binding agreements in shaping the future of AI.
The Landscape of AI Governance
AI governance mechanisms can be broadly categorised as binding or non-binding. Non-binding governance plays a crucial role in multilateral regulation because it is quicker to adopt and carries no built-in cost for non-compliance. This flexibility attracts more participants and makes non-binding instruments more adaptable to the fast pace of technological change.
Several key international organisations are involved in AI governance, including the Organisation for Economic Co-operation and Development (OECD), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the International Telecommunication Union (ITU).
UNESCO's Influence
UNESCO stands out as the most influential actor: its recommendations have been adopted by 193 Member States. They cover a broad range of issues, including surveillance, oversight, data protection, and the environment. Although non-binding, their widespread adoption makes them among the most inclusive and effective AI governance mechanisms.
OECD's AI Principles
The OECD's governance efforts are also significant, with its 38 Member States and eight non-member states adopting its definition of AI. The OECD AI Principles comprise five key recommendations: inclusive growth, human rights, transparency, robustness, and accountability. These principles are designed to be practical, flexible, and able to stand the test of time.
G7's Policy Guidance
The G7 has also focused on AI, with the 2023 Hiroshima Process on Generative Artificial Intelligence and the 2024 Hiroshima AI Process Friends Group at the OECD. These initiatives aim to address the opportunities and risks associated with AI.
Inter-Agency Working Group on Artificial Intelligence
The ITU, in collaboration with UNESCO, hosts the Inter-Agency Working Group on Artificial Intelligence (IAWG-AI). This group brings together expertise on AI ethics from across the United Nations system and supports capacity development. The IAWG-AI produced the UN System White Paper on AI Governance, an influential reference that shapes UN programming and outreach on AI.
UNESCO’s Recommendations on AI Ethics
UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, is the most widely adopted instrument of its kind globally, applicable to all 194 of UNESCO's Member States. It emphasises human rights, peaceful societies, inclusiveness, and environmental sustainability. UNESCO also conducts state readiness assessments and hosts the Global AI Ethics and Governance Observatory, providing policy guidance through research and best practices.
The Role of Standard-Setting Organisations
Standard-setting plays a crucial role in the international governance of AI. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) are key players in this domain. Together they run a joint subcommittee on artificial intelligence, ISO/IEC JTC 1/SC 42, which focuses on AI standardisation. Founded in 2017, SC 42 operates on a one-country, one-vote basis and has five working groups covering foundational standards, big data, trustworthiness, use cases and applications, and computational approaches and characteristics of AI.
The Importance of Non-Binding Mechanisms
As AI continues to evolve, non-binding mechanisms, such as those leveraged by the OECD, UNESCO, and standard-setting bodies, will be well positioned to address the challenges and opportunities it raises. These mechanisms are more adaptable and more likely to bring a broad range of actors to the table, including civil society and businesses. Binding regulations such as the EU's AI Act are also crucial, but because they take longer to negotiate and amend, non-binding mechanisms remain the preferred choice for the moment.
Non-binding agreements are essential in ensuring that AI development is ethical, inclusive, and aligned with global values.
Frequently Asked Questions
What are the main principles of AI governance?
AI governance principles include inclusive growth, human rights, transparency, robustness, and accountability. These principles aim to keep AI development ethical and beneficial to society.
Why are non-binding agreements important in AI governance?
Non-binding agreements are important because they are quicker to adopt, have no built-in cost for non-compliance, and are more adaptable to the fast-paced changes in technology.
Which international organisations are key players in AI governance?
Key players in AI governance include the OECD, UNESCO, ITU, and the G7. These organisations work on both binding and non-binding principles to guide AI development.
What are UNESCO's recommendations on AI ethics?
UNESCO's Recommendation on the Ethics of Artificial Intelligence emphasises human rights, peaceful societies, inclusiveness, and environmental sustainability. It is the most widely adopted instrument of its kind globally, applicable to all 194 of UNESCO's Member States.
How do standard-setting organisations contribute to AI governance?
Standard-setting organisations such as ISO and IEC contribute to AI governance by creating voluntary standards that help ensure AI systems are safe, reliable, and ethical. They operate through consensus-based committees and working groups.